AAAI.2016 - Game Playing and Interactive Entertainment

Total: 2

#1 Poker-CNN: A Pattern Learning Strategy for Making Draws and Bets in Poker Games Using Convolutional Networks

Authors: Nikolai Yakovenko; Liangliang Cao; Colin Raffel; James Fan

Poker is a family of card games that includes many variations. We hypothesize that most poker games can be solved as a pattern matching problem, and propose creating a strong poker playing system based on a unified poker representation. Our poker player learns through iterative self-play, and improves its understanding of the game by training on the results of its previous actions without sophisticated domain knowledge. We evaluate our system on three poker games: single-player video poker, two-player Limit Texas Hold'em, and finally two-player 2-7 triple draw poker. We show that our model can quickly learn patterns in these very different poker games while improving from zero knowledge to a competitive player against human experts. The contributions of this paper include: (1) a novel representation for poker games, extendable to different poker variations, (2) a Convolutional Neural Network (CNN) based learning model that can effectively learn the patterns in three different games, and (3) a self-trained system that significantly beats the heuristic-based program on which it is trained and is competitive against human expert players.
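To make the idea of a unified card representation concrete, below is a minimal sketch of the kind of card-tensor input and small CNN the abstract gestures at. The 4x13 suit-by-rank planes, the two-channel hand/board layout, and the fold/call/raise action head are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

# Hypothetical card-tensor encoding: one 4x13 binary plane (suits x ranks)
# per card group, stacked as input channels. The abstract does not specify
# the exact representation, so this layout is an assumption for illustration.
class PokerCNN(nn.Module):
    def __init__(self, in_channels=2, n_actions=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Flattened 64-channel 4x13 feature map -> e.g. fold/call/raise logits
        self.head = nn.Linear(64 * 4 * 13, n_actions)

    def forward(self, x):
        h = self.features(x)
        return self.head(h.flatten(1))

# Channel 0 holds the player's cards, channel 1 the board (assumed layout).
hand = torch.zeros(1, 2, 4, 13)
hand[0, 0, 0, 12] = 1.0  # ace of the first suit in the player's hand
logits = PokerCNN()(hand)
print(logits.shape)  # torch.Size([1, 3])
```

Because every variant's cards fit the same suit-by-rank grid, a representation like this is what would let one network architecture be reused across video poker, Hold'em, and triple draw.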

#2 Reuse of Neural Modules for General Video Game Playing

Authors: Alexander Braylan; Mark Hollenbeck; Elliot Meyerson; Risto Miikkulainen

A general approach to knowledge transfer is introduced in which an agent controlled by a neural network adapts how it reuses existing networks as it learns in a new domain. Networks trained for a new domain can improve their performance by routing activation selectively through previously learned neural structure, regardless of how or for what it was learned. A neuroevolution implementation of this approach is presented with application to high-dimensional sequential decision-making domains. This approach is more general than previous approaches to neural transfer for reinforcement learning. It is domain-agnostic and requires no prior assumptions about the nature of task relatedness or mappings. The method is analyzed in a stochastic version of the Arcade Learning Environment, demonstrating that it improves performance in some of the more complex Atari 2600 games, and that the success of transfer can be predicted based on a high-level characterization of game dynamics.
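A minimal sketch of the core mechanism, selectively routing activation through a previously learned network while a new network is trained for the target domain, follows. The per-unit sigmoid gate, the frozen old module, and the gradient-based setup are illustrative assumptions; the paper itself evolves the routing with neuroevolution rather than learning it by backpropagation.

```python
import torch
import torch.nn as nn

# Sketch: blend activations from a frozen, previously trained module with a
# fresh module via a learned per-unit gate. Shapes (128-byte Atari RAM input,
# 18-action ALE action set) are chosen for the example, not taken from the paper.
class TransferPolicy(nn.Module):
    def __init__(self, old_module, in_dim, hid_dim, n_actions):
        super().__init__()
        self.old = old_module
        for p in self.old.parameters():
            p.requires_grad = False     # reuse existing structure, don't retrain it
        self.new = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.Tanh())
        self.gate = nn.Parameter(torch.zeros(hid_dim))  # per-unit routing weights
        self.head = nn.Linear(hid_dim, n_actions)

    def forward(self, x):
        mix = torch.sigmoid(self.gate)  # 0 = ignore old module, 1 = rely on it
        h = mix * self.old(x) + (1 - mix) * self.new(x)
        return self.head(h)

# Pretend "old" was trained on some other game with compatible shapes.
old = nn.Sequential(nn.Linear(128, 64), nn.Tanh())
policy = TransferPolicy(old, in_dim=128, hid_dim=64, n_actions=18)
print(policy(torch.randn(1, 128)).shape)  # torch.Size([1, 18])
```

The point of the gate is that the new agent can exploit old structure wherever it helps and route around it wherever it does not, which is why the approach needs no prior assumptions about how the source and target tasks are related.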